20 research outputs found

    On Neural Associative Memory Structures: Storage and Retrieval of Sequences in a Chain of Tournaments

    Get PDF
    Associative memories enjoy many interesting properties in terms of error correction capabilities, robustness to noise, storage capacity, and retrieval performance, and their usage spans a large set of applications. In this letter, we investigate and extend tournament-based neural networks, originally proposed by Jiang, Gripon, Berrou, and Rabbat (2016): a novel sequence storage associative memory architecture with high memory efficiency and accurate sequence retrieval. We propose a more general method for learning the sequences, which we call feedback tournament-based neural networks. The retrieval process is also extended to both directions, forward and backward; in other words, any large-enough segment of a sequence can produce the whole sequence. Furthermore, two retrieval algorithms, cache-winner and explore-winner, are introduced to increase the retrieval performance. Through simulation results, we shed light on the strengths and weaknesses of each algorithm.
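The chain-of-tournaments idea above can be illustrated in miniature (a toy sketch, not the authors' exact construction): each step of a sequence activates one neuron in a cluster, consecutive steps are linked by directed edges, and forward retrieval walks those edges from a known segment.

```python
from collections import defaultdict

class ChainMemory:
    """Toy sequence memory: the neuron for step t of a sequence lives in
    cluster t mod n_clusters, and consecutive steps are joined by an edge."""
    def __init__(self, n_clusters):
        self.c = n_clusters
        self.edges = defaultdict(set)   # (cluster, neuron) -> successor nodes

    def store(self, seq):
        nodes = [(t % self.c, sym) for t, sym in enumerate(seq)]
        for a, b in zip(nodes, nodes[1:]):
            self.edges[a].add(b)

    def retrieve(self, prefix, length):
        # forward retrieval; the prefix is assumed to start at position 0
        out = list(prefix)
        node = ((len(prefix) - 1) % self.c, prefix[-1])
        while len(out) < length and self.edges[node]:
            node = next(iter(self.edges[node]))  # unambiguous: one stored sequence
            out.append(node[1])
        return out

mem = ChainMemory(n_clusters=4)
mem.store([3, 7, 1, 9, 2, 5])
print(mem.retrieve([3, 7], length=6))  # -> [3, 7, 1, 9, 2, 5]
```

With several stored sequences the walk would need a longer disambiguating window (and, in the backward direction, reversed edges); this is exactly the kind of ambiguity the feedback construction and the cache-winner/explore-winner algorithms address.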

    Clique-Based Neural Associative Memories

    Get PDF
    Auto-associative memories store a set of patterns and retrieve them from a part of their contents. This thesis focuses on developing and extending a type of associative memory relying on a class of coded neural networks called clique-based neural networks.

    Background. Both associative memories and erasure-correcting decoders deal with similar tasks that revolve around retrieving missing pieces of information. Despite the similarity of their tasks, however, there is a gap in efficiency and performance that motivates applying coding techniques in the design of associative memories. Clique-based neural networks, introduced by Gripon and Berrou, denote a family of associative memories inspired by biological considerations as well as concepts from information theory. The use of error-correcting coding and decoding techniques, borrowed from the field of information theory, considerably boosts the performance of these associative memories. The network is organized in clusters of interacting neurons such that patterns can be stored as neural cliques, which in turn can be seen as codewords of a code. The tournament-based neural network is an extension of clique-based neural networks with the ability to store sequences; in this model, sequences of any length can be stored as chains of tournaments. Both clique-based and tournament-based associative memories have considerably larger storage capacity than the Hopfield model, commonly considered the benchmark model for associative memories.

    Contribution. The aim of this thesis is to advance the research area of associative memory by generalizing the concepts of clique-based and tournament-based neural networks; the generalization is expected to yield superior efficiency and retrieval performance. First, in Paper I, coding techniques are used at two levels to enhance storage capacity and the retrieval of partial erasures. In Paper II, a modification to the structure of clique-based neural networks is proposed to enhance the error tolerance of the memory. Lastly, in Paper III, a modified version of tournament-based neural networks is used for retrieval of a sequence from a given segment by means of forward and backward retrievals; moreover, sequence retrieval performance is enhanced with new retrieval techniques.

    Discussion. We achieve the aim of generalizing the clique-based associative memories originally proposed by Gripon, Berrou, and co-authors to more resilient memories using coding-theoretic and graph-theoretic approaches while maintaining their biologically plausible structures. The proposed models are quite flexible and can be employed collectively.

    Keywords. Neural Associative Memory, Content Addressable Memory, Error-Correcting Codes, Sparse Graphs, Sequence Storage, Clique-Based Neural Networks, Tournament-Based Neural Networks
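The clique-based storage principle described above can be sketched minimally (a simplification, not the thesis model): each pattern occupies one neuron per cluster, storage adds all pairwise edges (a clique), and retrieval fills each missing cluster by winner-take-all scoring against the known units.

```python
from itertools import combinations

class CliqueMemory:
    """Toy clique-based memory: n_clusters clusters of n_neurons each;
    a pattern picks one neuron per cluster and is stored as a clique."""
    def __init__(self, n_clusters, n_neurons):
        self.c, self.l = n_clusters, n_neurons
        self.W = set()  # undirected edges between (cluster, neuron) nodes

    def store(self, pattern):
        nodes = list(enumerate(pattern))      # (cluster, neuron) per cluster
        for a, b in combinations(nodes, 2):
            self.W.add(frozenset((a, b)))

    def retrieve(self, partial):
        # partial: dict cluster -> known neuron; fill the erased clusters
        known, result = set(partial.items()), dict(partial)
        for cl in range(self.c):
            if cl in result:
                continue
            # winner-take-all: neuron with the most edges to the known units
            scores = [sum(frozenset(((cl, n), k)) in self.W for k in known)
                      for n in range(self.l)]
            result[cl] = max(range(self.l), key=scores.__getitem__)
        return [result[cl] for cl in range(self.c)]

mem = CliqueMemory(n_clusters=4, n_neurons=8)
mem.store([2, 5, 1, 7])
print(mem.retrieve({0: 2, 3: 7}))  # -> [2, 5, 1, 7]
```

The erasure-decoding flavor is visible here: recovering a pattern from a partial cue is the same task as completing a codeword from its unerased symbols, which is what motivates borrowing decoder techniques in the papers above.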

    When Behavior Analysis Meets Machine Learning: Formation of Stimulus Equivalence Classes and Adaptive Learning in Artificial Agents

    No full text
    OsloMet Avhandling 2021 nr 3, Asieh Abolpour Mofrad. Dissertation for the degree of philosophiae doctor (PhD). Department of Behavioral Science, Faculty of Health Sciences, OsloMet - Oslo Metropolitan University. Spring 2021. ISSN 2535-471X (print) / ISSN 2535-5454 (online). ISBN 978-82-8364-282-7 (print) / ISBN 978-82-8364-292-6 (online).

    Enhanced Equivalence Projective Simulation: A Framework for Modeling Formation of Stimulus Equivalence Classes

    No full text
    Formation of stimulus equivalence classes has been recently modeled through equivalence projective simulation (EPS), a modified version of a projective simulation (PS) learning agent. PS is endowed with an episodic memory that resembles the internal representation in the brain and the concept of cognitive maps. PS flexibility and interpretability enable the EPS model and, consequently, the model we explore in this letter, to simulate a broad range of behaviors in matching-to-sample experiments. The episodic memory, the basis for agent decision making, is formed during the training phase. Derived relations in the EPS model that are not trained directly but can be established via the network's connections are computed on demand during the test phase trials by likelihood reasoning. In this letter, we investigate the formation of derived relations in the EPS model using network enhancement (NE), an iterative diffusion process that yields an offline approach to agent decision making in the testing phase. The NE process is applied after the training phase to denoise the memory network, so that derived relations are formed in the memory network and retrieved during the testing phase. During the NE phase, indirect relations are enhanced and the structure of episodic memory changes. This approach can also be interpreted as the agent's replay after the training phase, which is in line with recent findings in behavioral and neuroscience studies. In comparison with EPS, our model is able to capture the formation of derived relations and other features, such as the nodal effect, in a more intrinsic manner. Decision making in the test phase is not an ad hoc computational method, but rather a retrieval and update process of the cached relations from the memory network based on the test trial. In order to study the role of the parameters in agent performance, the proposed model is simulated and the results are discussed across various experimental settings.
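How diffusion can form derived relations may be illustrated with a generic random-walk diffusion on a toy memory network (a simplification; the actual NE operator differs in its normalization and regularization). After training only the A-B and B-C relations, diffusion gives the untrained A-C relation positive weight while the directly trained relations remain stronger.

```python
import numpy as np

# Toy memory network over stimuli A, B, C: only A-B and B-C are trained.
W = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])

def diffuse(W, alpha=0.8, iters=50):
    """Accumulate discounted multi-step random-walk transition probabilities:
    S converges to (1 - alpha) * sum_k alpha^k P^k."""
    P = W / W.sum(axis=1, keepdims=True)   # row-stochastic transition matrix
    S = np.eye(len(W))
    for _ in range(iters):
        S = (1 - alpha) * np.eye(len(W)) + alpha * P @ S
    return S

S = diffuse(W)
print(S[0, 2] > 0)   # the derived A-C relation now carries positive weight
```

The two-step walk A -> B -> C is what makes `S[0, 2]` positive: this mirrors the nodal-distance effect, where relations mediated by more nodes receive proportionally weaker enhanced weights.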

    On solving the SPL problem using the concept of probability flux

    No full text
    The Stochastic Point Location (SPL) problem [20] is a fundamental learning problem that has recently attracted considerable research attention. SPL can be summarized as searching for an unknown point in an interval under faulty feedback. The search is performed via a Learning Mechanism (LM), an algorithm that interacts with a stochastic Environment, which in turn informs it about the direction of the search. Since the Environment is stochastic, the guidance could be faulty. The first solution to the SPL problem, pioneered two decades ago by Oommen, relies on discretizing the search interval and performing a controlled random walk on it. The state of the random walk at each step is taken as the estimate of the point location. The convergence of this simple estimation strategy is proved for infinite resolution, i.e., infinite memory; however, it yields rather poor accuracy at low discretization resolutions. In this paper, we present two major contributions to the SPL problem. First, we demonstrate that the estimate of the point location can be significantly improved by resorting to the concept of mutual probability flux between neighboring states along the line. Second, we are able to accurately track the position of the optimal point and simultaneously show a method by which we can estimate the error probability characterizing the Environment. Interestingly, learning this error probability of the Environment takes place in tandem with the estimation of the unknown location. We present and analyze several experiments discussing the weaknesses and strengths of the different methods.
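Oommen's baseline scheme sketched above — discretize the interval and perform a controlled random walk — can be written compactly (a minimal simulation with hypothetical parameter choices, shown here as the baseline against which the flux-based estimators improve):

```python
import random

def spl_random_walk(lam=0.73, p=0.9, N=100, steps=20000, seed=0):
    """Discretized SPL baseline: a controlled random walk on states 0..N.
    The stochastic environment points toward the unknown point lam but
    lies with probability 1 - p; the estimate is the time-averaged state."""
    rng = random.Random(seed)
    state, total = N // 2, 0
    for t in range(steps):
        toward_right = (state / N) < lam               # truthful direction
        go_right = toward_right if rng.random() < p else not toward_right
        state = min(N, state + 1) if go_right else max(0, state - 1)
        if t >= steps // 2:                            # average after burn-in
            total += state
    return total / (steps - steps // 2) / N

est = spl_random_walk()
print(est)  # close to the hidden point 0.73
```

The weakness the paper targets is visible in the construction: with resolution `N`, the walk can never resolve the point more finely than `1/N`, whereas reasoning about the probability flux between neighboring states extracts sub-resolution information from the same walk.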

    Solving Stochastic Point Location Problem in a Dynamic Environment with Weak Estimation

    No full text
    The Stochastic Point Location (SPL) problem introduced by Oommen [7] can be summarized as searching for an unknown point in an interval under possibly faulty feedback. The search is performed via a Learning Mechanism (LM), an algorithm that interacts with a stochastic environment, which in turn informs it about the direction of the search. Since the environment is stochastic, the guidance could be faulty. The first solution to the SPL problem, pioneered by Oommen [7] two decades ago, relies on discretizing the search interval and performing a controlled random walk on it. The state of the random walk at each step is taken as the estimate of the point location. The convergence of this simple estimation strategy is proved for infinite resolution; however, it yields rather poor accuracy at low resolutions. In this paper, we present sophisticated tracking methods that outperform Oommen's strategy [7]. Our methods revolve around tracking some key statistical properties of the underlying random walk using the family of weak estimators. Furthermore, we address settings where the point location is non-stationary, i.e., where the LM searches with uncertainty for a possibly moving point in an interval; in such settings, asymptotic results are no longer applicable. Simulation results show that the proposed methods outperform Oommen's method for estimating the point location, reducing the estimation error by up to 75%.
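The appeal of weak estimators in the non-stationary setting can be sketched with an exponentially discounted update (a minimal illustration in the spirit of the stochastic learning weak estimator; the discount and switch point are hypothetical): unlike a running mean, the estimate keeps tracking a Bernoulli parameter after it switches mid-stream.

```python
import random

def slwe_track(lam=0.99, steps=4000, seed=1):
    """Weak-estimator sketch: exponentially discounted Bernoulli estimate.
    The environment's parameter switches from 0.8 to 0.2 halfway through."""
    rng = random.Random(seed)
    p_hat, trace = 0.5, []
    for t in range(steps):
        p_true = 0.8 if t < steps // 2 else 0.2   # non-stationary environment
        x = 1.0 if rng.random() < p_true else 0.0
        p_hat = lam * p_hat + (1 - lam) * x       # discounted update
        trace.append(p_hat)
    return trace

trace = slwe_track()
print(trace[1999], trace[-1])  # near 0.8 before the switch, near 0.2 after
```

A maximum-likelihood running mean would converge to the time average of the two regimes and never recover after the switch; the discounted update trades some variance for exactly the tracking ability the abstract describes.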